Aspect sentiment triplet extraction (ASTE) aims to extract (aspect term, opinion term, sentiment) triplets from sentences. Because the initial datasets used to evaluate ASTE models had flaws, several later studies independently corrected them and released new versions. As a result, different studies evaluate their methods on different dataset versions, which makes ASTE-related work hard to follow. In this paper, we analyze the relations between the different dataset versions and argue that the entire-space version should be used for ASTE. Besides the sentences containing triplets and the triplets in those sentences, the entire-space version additionally includes sentences without triplets and aspect terms that do not belong to any triplet. The entire-space version is therefore consistent with real-world scenarios, and evaluating models on it better reflects their performance in such scenarios. In addition, experimental results show that evaluating models on non-entire-space datasets inflates the performance of existing models, and that models trained on the entire-space version obtain better performance.
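To make the inflation argument concrete, here is a toy illustration (not the paper's evaluation code): restricting evaluation to sentences that contain gold triplets silently discards false positives predicted on triplet-free sentences, so the restricted score comes out higher than the entire-space score.

```python
# Toy sketch: triplet-level F1 on the entire space vs. a restricted space
# that drops sentences without gold triplets.

def f1(preds, golds):
    tp = sum(len(p & g) for p, g in zip(preds, golds))
    n_pred = sum(len(p) for p in preds)
    n_gold = sum(len(g) for g in golds)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Each sentence is a set of (aspect term, opinion term, sentiment) triplets.
gold = [{("battery", "great", "POS")}, set()]                       # 2nd sentence has no triplets
pred = [{("battery", "great", "POS")}, {("screen", "ok", "NEU")}]   # spurious triplet on it

entire_space = f1(pred, gold)                                # penalises the spurious triplet
restricted = f1([p for p, g in zip(pred, gold) if g],
                [g for g in gold if g])                      # silently ignores it
print(entire_space, restricted)                              # restricted score is inflated
```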
Video representation learning has been successful in video-text pre-training for zero-shot transfer, where each sentence is trained to be close to its paired video clips in a common feature space. For long videos, given a paragraph of description whose sentences describe different segments of the video, matching all sentence-clip pairs implicitly aligns the paragraph with the full video. However, such a unit-level similarity measure may ignore the global temporal context over a long time span, which inevitably limits generalization. In this paper, we propose a contrastive learning framework, TempCLR, to compare the full video and the paragraph explicitly. As the video/paragraph is formulated as a sequence of clips/sentences, under the constraint of their temporal order, we use dynamic time warping to compute the minimum cumulative cost over sentence-clip pairs as the sequence-level distance. To explore the temporal dynamics, we break the consistency of temporal order by shuffling video clips or sentences according to their temporal granularity. In this way, we obtain clip/sentence representations that perceive temporal information and thus facilitate sequence alignment. Beyond pre-training on videos and paragraphs, our approach also generalizes to matching between different video instances. We evaluate our approach on video retrieval, action step localization, and few-shot action recognition, and achieve consistent performance gains across all three tasks. Detailed ablation studies are provided to justify the design of the approach.
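The sequence-level distance is standard dynamic time warping over the sentence-clip cost matrix. Below is a minimal sketch of that computation, assuming clip and sentence embeddings are already given as arrays; it illustrates the idea and is not the authors' implementation.

```python
import numpy as np

def dtw_distance(clip_emb, sent_emb):
    """Sequence-level distance between a video (clips) and a paragraph (sentences)
    via dynamic time warping, as a rough sketch of the idea in TempCLR."""
    # Pairwise cost: 1 - cosine similarity between every clip and every sentence.
    c = clip_emb / np.linalg.norm(clip_emb, axis=1, keepdims=True)
    s = sent_emb / np.linalg.norm(sent_emb, axis=1, keepdims=True)
    cost = 1.0 - c @ s.T                                   # shape (n_clips, n_sents)

    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Temporal order is preserved: only monotone alignment moves are allowed.
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    return acc[n, m]   # minimum cumulative cost over sentence-clip pairs

# Example: 4 clips and 3 sentences with 64-d embeddings.
rng = np.random.default_rng(0)
print(dtw_distance(rng.normal(size=(4, 64)), rng.normal(size=(3, 64))))
```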
This paper presents an ontology-aware pretrained language model (OPAL) for end-to-end task-oriented dialogue (TOD). Unlike chit-chat dialogue models, task-oriented dialogue models fulfill at least two task-specific modules: a dialogue state tracker (DST) and a response generator (RG). The dialogue state consists of domain-slot-value triples, which are regarded as the user's constraints for searching the domain-related database. Large-scale task-oriented dialogue data annotated with dialogue states is usually inaccessible, which hinders the development of pretrained language models for task-oriented dialogue. We propose a simple yet effective pretraining method to alleviate this problem, which consists of two pretraining stages. The first stage is pretraining on large-scale contextual text data, where the structured information of the text is extracted by an information extraction tool. To bridge the gap between the pretraining method and downstream tasks, we design two pretraining tasks: ontology-like triple recovery and next-text generation, which simulate DST and RG, respectively. The second stage is fine-tuning the pretrained model on TOD data. Experimental results show that the proposed method achieves competitive performance on the CamRest676 and MultiWOZ benchmarks even without any TOD data.
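As an illustration only, the following sketch shows one plausible way such pre-training pairs could be constructed from contextual text and extracted triples; the prompt formats and field names are assumptions, not taken from the paper.

```python
# A minimal sketch (not the authors' code) of building the two OPAL-style
# pre-training examples from a contextual text and the (domain, slot, value)
# triples that an information-extraction tool pulls out of it.

def build_pretraining_pairs(context, triples, next_text):
    triple_str = " ; ".join(f"{d} - {s} - {v}" for d, s, v in triples)
    # Task 1: ontology-like triple recovery, the analogue of dialogue state tracking.
    recovery = {"input": f"recover triples: {context}", "target": triple_str}
    # Task 2: next-text generation conditioned on the triples, the analogue of response generation.
    generation = {"input": f"continue: {context} [triples] {triple_str}",
                  "target": next_text}
    return recovery, generation

print(build_pretraining_pairs(
    "I booked a table at a cheap italian restaurant in the centre.",
    [("restaurant", "food", "italian"), ("restaurant", "pricerange", "cheap")],
    "The booking was confirmed for seven."))
```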
Autonomous agents have made great strides in specialist domains such as Atari games. However, they typically learn tabula rasa in isolated environments with limited and manually conceived objectives, and thus fail to generalize across a wide variety of tasks and capabilities. Inspired by how humans continually learn and adapt in the open world, we advocate a trinity of ingredients for building generalist agents: 1) an environment that supports a multitude of tasks and goals, 2) a large-scale database of multimodal knowledge, and 3) a flexible and scalable agent architecture. We introduce MineDojo, a new framework built on the popular game Minecraft, which features a simulation suite with thousands of diverse open-ended tasks and an internet-scale knowledge base of Minecraft videos, tutorials, wiki pages, and forum discussions. Using MineDojo's data, we propose a novel agent learning algorithm that leverages large pretrained video-language models as a learned reward function. Our agent is able to solve a variety of open-ended tasks specified in free-form language, without any manually designed dense shaping rewards. We open-source the simulation suite and knowledge base (https://minedojo.org) to promote research toward the goal of generally capable embodied agents.
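The learned-reward idea can be sketched as follows: the reward at each step is the similarity between an embedding of the agent's recent frames and the embedding of the free-form goal text. The encoders below are stand-ins for the pretrained video-language model, so this is an assumption-laden illustration rather than MineDojo's actual reward model.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def language_conditioned_reward(recent_frames, goal_text, encode_video, encode_text):
    # Reward is higher when the recent behaviour looks like the language goal.
    video_emb = encode_video(recent_frames)   # e.g. the last 16 frames
    text_emb = encode_text(goal_text)         # free-form task description
    return cosine(video_emb, text_emb)

# Dummy encoders keep the sketch runnable; a real setup would plug in the
# pretrained video-language model here.
rng = np.random.default_rng(0)
reward = language_conditioned_reward(
    recent_frames=rng.normal(size=(16, 160, 256, 3)),
    goal_text="shear a sheep with shears",
    encode_video=lambda frames: rng.normal(size=512),
    encode_text=lambda text: rng.normal(size=512))
print(reward)
```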
With the development of pretrained language models, dialogue understanding (DU) has seen remarkable success. However, current DU approaches usually employ an independent model for each distinct DU task, without considering the knowledge shared across different tasks. In this paper, we propose a unified generative dialogue understanding framework, named UniDU, to achieve effective information exchange across different DU tasks. Here, we reformulate all DU tasks into a unified prompt-based generative paradigm. More importantly, a novel model-agnostic multi-task training strategy (MATS) is introduced to dynamically adjust the weights of diverse tasks for the best knowledge sharing during training, according to the nature and available data of each task. Experiments on ten DU datasets covering five fundamental DU tasks show that the proposed UniDU framework significantly outperforms well-designed task-specific methods on all tasks. MATS also reveals the knowledge-sharing structure of these tasks. Finally, UniDU achieves promising performance in unseen dialogue domains, showing great potential for generalization.
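One plausible reading of dynamic task weighting is to sample training tasks in proportion to their recent losses, so harder or data-poorer tasks get more updates. The sketch below illustrates that idea only; it is not the paper's exact MATS update rule, and the task names are placeholders.

```python
import numpy as np

def sample_task(recent_losses, temperature=1.0, rng=None):
    """Sample the next training task with probability increasing in its recent loss."""
    rng = rng or np.random.default_rng()
    names = list(recent_losses)
    logits = np.array([recent_losses[n] for n in names]) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(names, p=probs)

losses = {"intent": 0.9, "state_tracking": 1.7, "summarization": 1.2}
print(sample_task(losses))   # state_tracking is sampled most often here
```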
Minimizing privacy leakage while ensuring data utility is a critical problem for data holders in privacy-preserving data publishing. Most existing studies concern only one type of data and resort to a single obscuring method, e.g., obfuscation or generalization, to achieve a privacy-utility tradeoff, which is inadequate for protecting real-life heterogeneous data and hard to defend against ever-growing machine-learning-based inference attacks. This work conducts a pilot study on privacy-preserving data publishing when both generalization and obfuscation operations are employed for heterogeneous data protection. To this end, we first propose novel privacy and utility quantification measures and formulate the hybrid privacy-preserving data obscuring problem to account for the joint effect of generalization and obfuscation. We then design a novel hybrid protection mechanism named HyObscure, which cross-iteratively optimizes the generalization and obfuscation operations for maximum privacy protection under a given utility guarantee. The convergence of the iterative process and the privacy leakage bound of HyObscure are also provided theoretically. Extensive experiments demonstrate that HyObscure significantly outperforms a variety of state-of-the-art baseline methods when facing various inference attacks under different scenarios. HyObscure also scales linearly with the data size and behaves robustly under varying key parameters.
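For intuition, the toy functions below illustrate the two obscuring operations that HyObscure jointly optimizes: generalization (coarsening a numeric attribute into a range) and obfuscation (randomly perturbing a categorical attribute). They are a minimal sketch, not the HyObscure mechanism itself, and the attribute names are made up.

```python
import random

def generalize_age(age, bin_width=10):
    """Generalization: replace an exact value by a coarser range."""
    lo = (age // bin_width) * bin_width
    return f"{lo}-{lo + bin_width - 1}"            # e.g. 37 -> "30-39"

def obfuscate_category(value, domain, flip_prob=0.2):
    """Obfuscation: with probability flip_prob, replace the value by a random one."""
    if random.random() < flip_prob:
        return random.choice(domain)
    return value

record = {"age": 37, "city": "Boston"}
published = {
    "age": generalize_age(record["age"]),
    "city": obfuscate_category(record["city"], ["Boston", "Austin", "Denver"]),
}
print(published)
```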
Dynamic map fusion techniques have been developed for connected vehicles to enlarge the sensing range and improve the sensing accuracy of individual vehicles. This paper proposes a federated learning (FL) based dynamic map fusion framework to achieve high map quality despite unknown numbers of objects in fields of view (FoVs), various sensing and model uncertainties, and missing data labels for online learning. The novelty of this work is threefold: (1) developing a three-stage fusion scheme to effectively predict the number of objects and to fuse multiple local maps with fidelity scores; (2) developing an FL algorithm that fine-tunes feature models (i.e., representation networks for feature extraction) distributively by aggregating model parameters; (3) developing a knowledge distillation method to generate FL training labels when data labels are unavailable. The proposed framework is implemented in the Car Learning to Act (CARLA) simulation platform. Extensive experimental results are provided to verify the superior performance and robustness of the developed map fusion and FL schemes.
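The parameter-aggregation step can be pictured with a FedAvg-style sketch that averages only the representation-network weights, weighted by each vehicle's local data size. This is an illustration of the aggregation idea under those assumptions, not the paper's exact algorithm.

```python
import numpy as np

def aggregate_representation(local_models, data_sizes):
    """Weighted average of the representation-network parameters across vehicles."""
    total = float(sum(data_sizes))
    keys = local_models[0].keys()
    return {
        k: sum((n / total) * m[k] for m, n in zip(local_models, data_sizes))
        for k in keys
    }

# Two vehicles, each with a tiny "representation network" of two weight tensors.
v1 = {"conv1": np.ones((3, 3)), "conv2": np.ones((3, 3)) * 2}
v2 = {"conv1": np.zeros((3, 3)), "conv2": np.ones((3, 3)) * 4}
global_repr = aggregate_representation([v1, v2], data_sizes=[100, 300])
print(global_repr["conv2"][0, 0])   # 0.25 * 2 + 0.75 * 4 = 3.5
```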
Edge federated learning (FL) is an emerging paradigm that learns a global parametric model from distributed datasets via wireless communications. This paper proposes a unit-modulus over-the-air computation (UMAirComp) framework to facilitate efficient edge federated learning, which simultaneously uploads local model parameters and updates global model parameters via analog beamforming. The proposed framework avoids sophisticated baseband signal processing, leading to low communication delay and implementation cost. A training loss bound of the UMAirComp FL system is derived, and two low-complexity large-scale optimization algorithms, termed penalty alternating minimization (PAM) and accelerated gradient projection (AGP), are proposed to minimize the nonconvex nonsmooth loss bound. Simulation results show that the proposed UMAirComp framework with the PAM algorithm achieves a smaller mean squared error of model parameter estimation, lower training loss, and lower test error. Moreover, the proposed UMAirComp framework with the AGP algorithm achieves satisfactory performance while reducing the computational complexity by orders of magnitude compared with existing optimization algorithms. Finally, we demonstrate the implementation of UMAirComp in a vehicle-to-everything autonomous driving simulation platform. It is found that the autonomous driving task is more sensitive to model parameter errors than other tasks, because the neural network for autonomous driving contains sparse model parameters.
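The superposition idea behind over-the-air aggregation under a unit-modulus constraint can be simulated in a few lines. The sketch below is a toy illustration with phase-only precoding; it does not implement the PAM or AGP optimization algorithms from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 8, 16                                        # number of devices, model dimension
local_updates = rng.normal(size=(K, d))             # each device's local parameter update
h = rng.normal(size=K) + 1j * rng.normal(size=K)    # complex channel gains

precoders = np.exp(-1j * np.angle(h))               # |b_k| = 1: the unit-modulus constraint
effective_gain = (h * precoders).real               # after phase alignment, equals |h_k|

noise = 0.01 * rng.normal(size=d)                   # receiver noise
received = effective_gain @ local_updates + noise   # signals superpose "in the air"
estimate = received / effective_gain.sum()          # estimate of the averaged update

# The residual error stems from the unequal channel magnitudes |h_k|; mitigating
# that mismatch is what the paper's beamforming optimization targets.
print(np.abs(estimate - local_updates.mean(axis=0)).max())
```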
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge of a large dataset into a smaller synthetic dataset, such that a model trained on the distilled dataset attains performance comparable to one trained on the original training dataset. However, existing dataset distillation techniques mainly aim at the best trade-off between resource-usage efficiency and model utility; the security risks they introduce have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation techniques in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
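For intuition, the sketch below shows a NAIVEATTACK-style poisoning step: stamping a trigger patch on a fraction of the raw images and relabeling them before distillation is run. The function and parameter names are illustrative assumptions rather than the paper's implementation; DOORPING would instead keep re-optimizing the trigger inside the distillation loop.

```python
import numpy as np

def poison(images, labels, target_class, poison_frac=0.05, patch_value=1.0, seed=0):
    """Stamp a small trigger patch on a random fraction of images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)), replace=False)
    images[idx, -4:, -4:, :] = patch_value          # 4x4 bright patch in one corner
    labels[idx] = target_class
    return images, labels

imgs = np.random.rand(1000, 32, 32, 3).astype(np.float32)
lbls = np.random.randint(0, 10, size=1000)
poisoned_imgs, poisoned_lbls = poison(imgs, lbls, target_class=0)
# Dataset distillation would then be run on (poisoned_imgs, poisoned_lbls),
# carrying the backdoor into the distilled synthetic set.
```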